chore: delete useless buffered activation #1270

Merged: 2 commits into bitsandbytes-foundation:main on Jul 16, 2024

Conversation

@Ther-nullptr (Contributor) commented Jul 5, 2024

For QLoRA models, we do not need to update $\mathbf{W}$, so the buffered activation $\mathbf{A}$ is useless. We suggest not saving $\mathbf{A}$ in ctx, to save memory.
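
For context, here is a minimal PyTorch sketch of the pattern behind this change (illustrative only, not the actual bitsandbytes `MatMul4Bit` code; `dequantize` is a stand-in for real 4-bit dequantization). Because the frozen base weight $\mathbf{W}$ never receives a gradient in QLoRA, the input activation $\mathbf{A}$, which would only be needed to compute $\partial L/\partial \mathbf{W}$, does not have to be saved in ctx; only the weight, needed for $\partial L/\partial \mathbf{A}$, is kept:

```python
import torch

def dequantize(w_q):
    # Stand-in for real 4-bit dequantization so the sketch runs end to end;
    # here the "quantized" weight is simply stored in full precision.
    return w_q

class FrozenQuantMatmul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A, W_q):
        # Before the fix, the equivalent code also stashed A here, keeping a
        # full (batch, seq, hidden)-sized tensor alive per layer for a
        # weight gradient that is never computed.
        ctx.save_for_backward(W_q)  # needed only for grad_A
        return A @ dequantize(W_q).t()

    @staticmethod
    def backward(ctx, grad_output):
        (W_q,) = ctx.saved_tensors
        grad_A = grad_output @ dequantize(W_q)  # dL/dA = dL/dy @ W
        # The gradient for W_q is None: the 4-bit base weight is frozen.
        return grad_A, None

# Gradients flow back to the activation (and on to any LoRA adapters
# upstream), while the frozen weight gets none.
A = torch.randn(4, 16, requires_grad=True)
W_q = torch.randn(8, 16)  # frozen: requires_grad=False
FrozenQuantMatmul.apply(A, W_q).sum().backward()
print(A.grad.shape)  # torch.Size([4, 16])
```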

@TimDettmers (Collaborator) commented Jul 16, 2024

Great catch! This is the bug that caused a lot of memory issues with QLoRA. I cannot believe I missed it for such a long time. This will significantly reduce the memory footprint of QLoRA fine-tuning with larger batch sizes or longer sequence lengths. Thank you!
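
To put the footprint in perspective, a back-of-the-envelope sketch (hypothetical numbers assuming bf16 activations and Llama-7B-like shapes; nothing here is measured from the PR): the needlessly buffered activation has shape (batch, seq_len, hidden), so the waste grows linearly in both batch size and sequence length, once per quantized linear layer.

```python
# Hypothetical illustration only; dimensions are assumptions, not PR data.
batch, seq_len, hidden = 4, 2048, 4096  # Llama-7B-like hidden size
bytes_per_elem = 2                      # bf16
per_layer = batch * seq_len * hidden * bytes_per_elem
print(f"{per_layer / 2**20:.0f} MiB buffered per quantized linear layer")  # 64 MiB
```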

@Titus-von-Koeller merged commit 9e75374 into bitsandbytes-foundation:main on Jul 16, 2024
8 of 21 checks passed
@Titus-von-Koeller (Collaborator) commented
Thank you so much for this extremely helpful contribution, @Ther-nullptr! I saw your repo and this commit; really interesting! Please tag me if there's anything else we can help you with in that effort.

Release with this fix just went live!

@AaronZLT commented Aug 4, 2024

I'm quite new to qlora's repo. What are $A$ and $B$ in this context? Are they LoRA's $LoRA_A$ and $LoRA_B$? If so, why do we not need to store the activation of $A$, given that $A$ and $B$ are trainable?

matthewdouglas pushed a commit to matthewdouglas/bitsandbytes that referenced this pull request on Oct 28, 2024
chore: delete useless buffered activation (bitsandbytes-foundation#1270)

* chore: delete useless buffered activation

* fix: fix bugs